A Message Passing Algorithm for the Minimum Cost Multicut Problem
We propose a dual decomposition and linear program relaxation of the NP-hard
minimum cost multicut problem. Unlike other polyhedral relaxations of the
multicut polytope, it is amenable to efficient optimization by message passing.
Like other polyhedral relaxations, it can be tightened efficiently by cutting
planes. We define an algorithm that alternates between message passing and
efficient separation of cycle- and odd-wheel inequalities. This algorithm is
more efficient than state-of-the-art algorithms based on linear programming,
including algorithms written in the framework of leading commercial software,
as we show in experiments with large instances of the problem from applications
in computer vision, biomedical image analysis and data mining.
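The cycle inequalities separated above have a simple combinatorial meaning: a valid multicut cannot cut exactly one edge of any cycle. A minimal pure-Python sketch of this check (the graph encoding and helper names are illustrative, not from the paper's implementation):

```python
def multicut_cost(edges, cut):
    """Cost of a multicut: sum of the weights of edges labeled as cut."""
    return sum(w for e, w in edges.items() if e in cut)

def violates_cycle_inequality(cycle_edges, cut):
    """A cut-edge labeling is inconsistent if a cycle contains exactly one
    cut edge: the edge's endpoints would then be separated along one path
    of the cycle but connected along the other."""
    return sum(e in cut for e in cycle_edges) == 1
```

Separation routines search for such violated cycles and add the corresponding inequality to the relaxation.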
MAP inference via Block-Coordinate Frank-Wolfe Algorithm
We present a new proximal bundle method for Maximum-A-Posteriori (MAP)
inference in structured energy minimization problems. The method optimizes a
Lagrangean relaxation of the original energy minimization problem using a
multi-plane block-coordinate Frank-Wolfe method that takes advantage of the
specific
structure of the Lagrangean decomposition. We show empirically that our method
outperforms state-of-the-art Lagrangean decomposition based algorithms on some
challenging Markov Random Field, multi-label discrete tomography and graph
matching problems.
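For intuition, here is plain Frank-Wolfe over the probability simplex, the basic building block such methods refine; this is a generic textbook sketch, not the paper's multi-plane block-coordinate variant:

```python
def frank_wolfe_simplex(grad, x, iters=200):
    """Minimize a smooth convex function over the probability simplex.
    grad: callable returning the gradient at x; x: starting point on the simplex."""
    for k in range(iters):
        g = grad(x)
        # Linear minimization oracle over the simplex: the vertex e_i
        # with the smallest gradient coordinate.
        i = min(range(len(x)), key=lambda j: g[j])
        gamma = 2.0 / (k + 2)  # standard open-loop step size
        x = [(1 - gamma) * xj + (gamma if j == i else 0.0)
             for j, xj in enumerate(x)]
    return x
```

For example, minimizing ||x - (2, 0)||^2 over the simplex converges to the vertex (1, 0), the closest feasible point.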
Maximum Persistency via Iterative Relaxed Inference with Graphical Models
We consider the NP-hard problem of MAP-inference for undirected discrete
graphical models. We propose a polynomial time and practically efficient
algorithm for finding a part of its optimal solution. Specifically, our
algorithm marks some labels of the considered graphical model either as (i)
optimal, meaning that they belong to all optimal solutions of the inference
problem, or (ii) non-optimal, meaning that they provably belong to no optimal
solution. With
access to an exact solver of a linear programming relaxation to the
MAP-inference problem, our algorithm marks the maximal possible (in a specified
sense) number of labels. We also present a version of the algorithm, which has
access to a suboptimal dual solver only and still can ensure the
(non-)optimality for the marked labels, although the overall number of the
marked labels may decrease. We propose an efficient implementation, which runs
in time comparable to a single run of a suboptimal dual solver. Our method
scales well and shows state-of-the-art results on computational benchmarks
from machine learning and computer vision.
Combinatorial persistency criteria for multicut and max-cut
In combinatorial optimization, partial variable assignments are called
persistent if they agree with some optimal solution. We propose persistency
criteria for the multicut and max-cut problem as well as fast combinatorial
routines to verify them. The criteria that we derive are based on mappings that
improve feasible multicuts and cuts, respectively. Our elementary criteria can be
checked enumeratively. The more advanced ones rely on fast algorithms for upper
and lower bounds for the respective cut problems and max-flow techniques for
auxiliary min-cut problems. Our methods can be used as a preprocessing
technique for reducing problem sizes or for computing partial optimality
guarantees for solutions output by heuristic solvers. We show the efficacy of
our methods on instances of both problems from computer vision, biomedical
image analysis and statistical physics.
Higher-order Projected Power Iterations for Scalable Multi-Matching
The matching of multiple objects (e.g. shapes or images) is a fundamental
problem in vision and graphics. In order to robustly handle ambiguities, noise
and repetitive patterns in challenging real-world settings, it is essential to
take geometric consistency between points into account. Computationally, the
multi-matching problem is difficult. It can be phrased as simultaneously
solving multiple (NP-hard) quadratic assignment problems (QAPs) that are
coupled via cycle-consistency constraints. The main limitations of existing
multi-matching methods are that they either ignore geometric consistency and
thus have limited robustness, or they are restricted to small-scale problems
due to their (relatively) high computational cost. We address these
shortcomings by introducing a Higher-order Projected Power Iteration method,
which (i) is efficient and scales to tens of thousands of points, (ii) is
straightforward to implement, (iii) is able to incorporate geometric
consistency, (iv) guarantees cycle-consistent multi-matchings, and (v) comes
with theoretical convergence guarantees. Experimentally we show that our
approach is superior to existing methods.
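As a toy illustration of the projected power iteration idea: ordinary pairwise power iteration on an affinity matrix, interleaved with a nonnegativity projection. This sketch is illustrative only and does not reproduce the paper's higher-order, cycle-consistent machinery:

```python
def projected_power_iteration(W, v, iters=50):
    """Power iteration on an affinity matrix W with a projection step:
    clip negative entries and renormalize onto the unit sphere."""
    for _ in range(iters):
        # Matrix-vector product u = W v.
        u = [sum(W[i][j] * v[j] for j in range(len(v))) for i in range(len(W))]
        u = [max(x, 0.0) for x in u]                 # nonnegativity projection
        norm = sum(x * x for x in u) ** 0.5
        v = [x / norm for x in u]
    return v
```

On a symmetric nonnegative affinity matrix this converges to the dominant eigenvector, whose large entries indicate mutually consistent matches.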
FastDOG: Fast Discrete Optimization on GPU
We present a massively parallel Lagrange decomposition method for solving
0-1 integer linear programs occurring in structured prediction. We propose a
new iterative update scheme for solving the Lagrangean dual and a perturbation
technique for decoding primal solutions. For representing subproblems we follow
Lange et al. (2021) and use binary decision diagrams (BDDs). Our primal and
dual algorithms require little synchronization between subproblems and
optimization over BDDs needs only elementary operations without complicated
control flow. This allows us to exploit the parallelism offered by GPUs for all
components of our method. We present experimental results on combinatorial
problems from MAP inference for Markov Random Fields, quadratic assignment and
cell tracking for developmental biology. Our highly parallel GPU implementation
improves upon the running times of the algorithms from Lange et al. (2021) by
up to an order of magnitude. In particular, we come close to or outperform some
state-of-the-art specialized heuristics while being problem agnostic. Our
implementation is available at https://github.com/LPMP/BDD. Published at CVPR
2022.
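The BDD subproblems mentioned above reduce optimization to shortest-path-style dynamic programming with only elementary operations. A hedged sketch for a single exactly-one constraint (the function name and setup are illustrative, not the paper's implementation):

```python
def min_cost_exactly_one(costs):
    """Min-cost 0-1 assignment subject to sum(x) == 1, computed as a
    shortest path through the layered state graph (a BDD for the constraint).
    State = number of variables set to 1 so far; states > 1 are pruned."""
    INF = float('inf')
    dp, arg = {0: 0.0}, {0: []}          # state -> best cost / assignment
    for c in costs:
        ndp, narg = {}, {}
        for ones, val in dp.items():
            for bit in (0, 1):
                n_ones = ones + bit
                if n_ones > 1:           # infeasible branch, prune
                    continue
                n_val = val + bit * c
                if n_val < ndp.get(n_ones, INF):
                    ndp[n_ones], narg[n_ones] = n_val, arg[ones] + [bit]
        dp, arg = ndp, narg
    return dp[1], arg[1]
```

Each layer only needs comparisons and additions over a small state set, which is what makes such subproblems amenable to massive GPU parallelism.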
Combinatorial Optimization for Panoptic Segmentation: A Fully Differentiable Approach
We propose a fully differentiable architecture for simultaneous semantic and
instance segmentation (a.k.a. panoptic segmentation) consisting of a
convolutional neural network and an asymmetric multiway cut problem solver. The
latter solves a combinatorial optimization problem that elegantly incorporates
semantic and boundary predictions to produce a panoptic labeling. Our
formulation allows us to directly maximize a smooth surrogate of the panoptic
quality metric by backpropagating the gradient through the optimization
problem. Experimental evaluation shows that backpropagating through the
optimization problem improves upon comparable approaches on the Cityscapes and
COCO datasets. Overall, our approach shows the utility of using combinatorial
optimization in tandem with deep learning in a challenging large scale
real-world problem and showcases benefits and insights into training such an
architecture. To be presented at NeurIPS 202
ClusterFuG: Clustering Fully connected Graphs by Multicut
We propose a graph clustering formulation based on multicut (a.k.a. weighted
correlation clustering) on the complete graph. Our formulation does not need
specification of the graph topology as in the original sparse formulation of
multicut, making our approach simpler and potentially better performing. In
contrast to unweighted correlation clustering, we allow for a more expressive
weighted cost structure. In dense multicut, the clustering objective is given
in a factorized form as inner products of node feature vectors. This allows for
an efficient formulation and inference in contrast to multicut/weighted
correlation clustering, which has at least quadratic representation and
computation complexity when working on the complete graph. We show how to
rewrite classical greedy algorithms for multicut in our dense setting and how
to modify them for greater efficiency and solution quality. In particular, our
algorithms scale to graphs with tens of thousands of nodes. Empirical evidence
on instance segmentation on Cityscapes and clustering of ImageNet datasets
shows the merits of our approach. ICML 202
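The factorized cost is what makes the dense setting tractable: the total join cost between two clusters equals the inner product of their summed feature vectors, so greedy merging never needs the quadratic pairwise cost matrix explicitly. A small pure-Python sketch under the convention that positive inner products favor merging (an assumption for illustration; names are not from the paper's code):

```python
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def greedy_dense_multicut(features):
    """Greedy merging with factorized costs: the join cost between clusters
    A and B is <sum_{u in A} f_u, sum_{v in B} f_v>, so each cluster only
    stores the sum of its members' feature vectors."""
    clusters = [[i] for i in range(len(features))]
    sums = [list(f) for f in features]
    while True:
        best, bi, bj = 0.0, -1, -1
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                gain = dot(sums[i], sums[j])
                if gain > best:
                    best, bi, bj = gain, i, j
        if bi < 0:                       # no attractive pair left
            return clusters
        clusters[bi] += clusters.pop(bj)
        sj = sums.pop(bj)
        sums[bi] = [a + b for a, b in zip(sums[bi], sj)]
```

Merging two clusters reduces to adding their feature sums, so each greedy step stays linear in the number of clusters times the feature dimension.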